Axial plane optical microscopy

We present axial plane optical microscopy (APOM), which, in contrast to conventional microscopy, directly images a sample's cross-section parallel to the optical axis of the objective lens without scanning. Combined with conventional microscopy, APOM simultaneously provides two orthogonal images of a 3D sample. More importantly, APOM uses only a single lens near the sample to achieve selective-plane illumination microscopy, as we demonstrated by three-dimensional (3D) imaging of fluorescent pollen grains and brain slices. This technique allows fast, high-contrast, and convenient 3D imaging of structures that lie hundreds of microns beneath the surfaces of large biological tissues.

    Preparation of berberine hydrochloride long-circulating liposomes by ionophore A23187-mediated ZnSO4 gradient method

The aim of this study was to prepare berberine hydrochloride long-circulating liposomes, optimize the formulation and process parameters, and investigate the influence of different factors on the encapsulation efficiency. Berberine hydrochloride liposomes were prepared in response to a transmembrane ion gradient established by the ionophore A23187. Free and liposomal drug were separated by cation exchange resin, and the amount of intraliposomal berberine hydrochloride was then determined by UV spectrophotometry. The optimized encapsulation efficiency was 94.3% ± 2.1% at a drug-to-lipid ratio of 1:20, and the mean diameter was 146.9 nm ± 3.2 nm. These results indicate that the ionophore A23187-mediated ZnSO4 gradient method is suitable for preparing berberine hydrochloride liposomes with the desired encapsulation efficiency and drug loading.
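As a rough illustration of how an encapsulation efficiency like the 94.3% reported above can be computed from the UV measurements described (liposomal drug after resin separation versus total drug), here is a minimal Python sketch. The calibration slope, intercept, and absorbance readings are hypothetical placeholders, not data from this study.

```python
# Hypothetical sketch: encapsulation efficiency (EE%) from UV absorbance.
# All numeric values are illustrative placeholders, not data from the paper.

def concentration_from_absorbance(absorbance: float, slope: float,
                                  intercept: float = 0.0) -> float:
    """Beer-Lambert calibration line A = slope * C + intercept,
    inverted to C = (A - intercept) / slope."""
    return (absorbance - intercept) / slope

SLOPE = 0.052      # absorbance per (ug/mL); assumed calibration slope
INTERCEPT = 0.003  # assumed calibration intercept

# Assumed absorbances: total sample vs. liposomal fraction after the
# free drug has been removed on a cation exchange resin.
a_total, a_liposomal = 0.624, 0.588

c_total = concentration_from_absorbance(a_total, SLOPE, INTERCEPT)
c_liposomal = concentration_from_absorbance(a_liposomal, SLOPE, INTERCEPT)

ee_percent = 100.0 * c_liposomal / c_total  # EE% = encapsulated / total
print(f"Encapsulation efficiency: {ee_percent:.1f}%")  # ~94.2% here
```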

    Molecular Quantum Dot Cellular Automata Based on Diboryl Monoradical Anions

Field-effect transistor (FET)-based microelectronics is approaching its size limit due to unacceptable power dissipation and short-channel effects. Molecular quantum dot cellular automata (MQCA) is a promising transistorless paradigm that encodes binary information in bistable charge configurations instead of currents and voltages. However, finding suitable candidate molecules for MQCA operation remains a challenge. Inspired by recent progress in boron radical chemistry, we theoretically predicted a series of new MQCA candidates built from diboryl monoradical anions. The unpaired electron resides mainly on one boron center and can be shifted to the other by an electrostatic stimulus, forming the bistable charge configurations required by MQCA. By investigating various bridge units with different substitution patterns (ortho-, meta-, and para-), we suggested several candidate molecules with potential MQCA applications.
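For readers unfamiliar with how bistable charge configurations encode bits, the sketch below evaluates the standard two-state QCA cell response function (Lent-Tougaw style): a cell's ground-state polarization is P = x/sqrt(1 + x^2) with x = E_k·P_driver/(2γ), where E_k is the electrostatic coupling to a neighboring cell and γ the inter-dot tunneling energy. This is a generic QCA model, not a result from this paper, and the parameter values are assumed for illustration.

```python
# Generic two-state QCA cell response (Lent-Tougaw model), treating the
# unpaired electron on the two boron "dots" as a two-level system.
# Not specific to the diboryl monoradical anions; parameters are assumed.
import numpy as np

def cell_response(p_driver: np.ndarray, e_k: float, gamma: float) -> np.ndarray:
    """Ground-state polarization of a cell driven by a neighbor with
    polarization p_driver; e_k is the electrostatic (kink) energy and
    gamma the inter-dot tunneling energy."""
    x = e_k * p_driver / (2.0 * gamma)
    return x / np.sqrt(1.0 + x**2)

p_in = np.linspace(-1.0, 1.0, 9)
# e_k >> gamma yields the sharp, saturating (i.e. digital) response
# that lets the two charge configurations act as binary 0 and 1.
print(np.round(cell_response(p_in, e_k=0.2, gamma=0.01), 3))
```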

    ApproxTrain: Fast Simulation of Approximate Multipliers for DNN Training and Inference

Edge training of Deep Neural Networks (DNNs) is a desirable goal for continuous learning; however, it is hindered by the enormous computational power required for training. Hardware approximate multipliers have shown their effectiveness in gaining resource efficiency in DNN inference accelerators; however, training with approximate multipliers is largely unexplored. To build resource-efficient accelerators with approximate multipliers supporting DNN training, a thorough evaluation of training convergence and accuracy for different DNN architectures and different approximate multipliers is needed. This paper presents ApproxTrain, an open-source framework that allows fast evaluation of DNN training and inference using simulated approximate multipliers. ApproxTrain is as user-friendly as TensorFlow (TF) and requires only a high-level description of a DNN architecture along with C/C++ functional models of the approximate multiplier. We improve simulation speed at the multiplier level with a novel LUT-based approximate floating-point (FP) multiplier simulator on GPU (AMSim). ApproxTrain leverages CUDA and efficiently integrates AMSim into the TensorFlow library to overcome the absence of native hardware approximate multipliers in commercial GPUs. We use ApproxTrain to evaluate the convergence and accuracy of DNN training with approximate multipliers for small and large datasets (including ImageNet) using LeNet and ResNet architectures. The evaluations demonstrate similar convergence behavior and negligible change in test accuracy compared to FP32 and bfloat16 multipliers. Compared to CPU-based approximate multiplier simulations in training and inference, the GPU-accelerated ApproxTrain is more than 2500x faster. Even against the original TensorFlow, which builds on highly optimized closed-source cuDNN/cuBLAS libraries with native hardware multipliers, ApproxTrain is only 8x slower.

Comment: 14 pages, 12 figures
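To make the lookup-table idea concrete, here is a minimal NumPy sketch of how a LUT-based approximate FP32 multiplier can be simulated: sign and exponent are handled exactly, while the significand product comes from a precomputed table. The 8-bit index width and the truncation stand-in for the user's C/C++ functional model are assumptions for illustration; ApproxTrain's actual AMSim design, CUDA kernels, and TensorFlow integration are not shown.

```python
# Minimal sketch of a LUT-based approximate float32 multiplier simulator.
# The index width and the truncation-based "approximate" model are assumed.
import numpy as np

M_BITS = 8  # top fraction bits used to index the LUT (assumed width)

# Stand-in for a C/C++ functional model of an approximate multiplier:
# multiply significands with everything below the top M_BITS truncated.
def approx_significand_product(ia: np.ndarray, ib: np.ndarray) -> np.ndarray:
    sa = (1.0 + ia / 2.0**M_BITS).astype(np.float32)  # 1.f, truncated
    sb = (1.0 + ib / 2.0**M_BITS).astype(np.float32)
    return sa * sb

# Build the (2^M_BITS x 2^M_BITS) table once, up front.
ia, ib = np.meshgrid(np.arange(2**M_BITS), np.arange(2**M_BITS), indexing="ij")
LUT = approx_significand_product(ia, ib)

def approx_mul(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Elementwise float32 multiply with the significand product replaced
    by a LUT lookup. Ignores zeros, subnormals, infs and NaNs for brevity."""
    xb, yb = x.view(np.uint32), y.view(np.uint32)
    sign = np.where((xb ^ yb) & 0x80000000, -1.0, 1.0).astype(np.float32)
    exp = (((xb >> 23) & 0xFF).astype(np.int32) - 127) \
        + (((yb >> 23) & 0xFF).astype(np.int32) - 127)
    i = (xb >> (23 - M_BITS)) & (2**M_BITS - 1)   # top fraction bits of x
    j = (yb >> (23 - M_BITS)) & (2**M_BITS - 1)   # top fraction bits of y
    return sign * np.ldexp(LUT[i, j], exp)

x = np.random.rand(4).astype(np.float32) + 0.5
y = np.random.rand(4).astype(np.float32) + 0.5
print(approx_mul(x, y))  # approximate products
print(x * y)             # exact products for comparison
```

Because the table is built once and every multiply becomes an index computation plus a memory lookup, the same structure maps naturally onto GPU threads, which is the kind of speedup the abstract attributes to AMSim.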